Fine-Grain Cache Resizing

Authors

  • Michael Zhang
  • Krste Asanović
Abstract

Progress: We have been investigating the use of dynamic cache resizing techniques for energy reduction. The idea of cache resizing is to match the active cache size to the program's current cache usage requirements. When cache usage is small, we can turn off parts of the cache; neither active switching energy nor static leakage energy is dissipated in the part of the cache that is turned off. Previous cache resizing techniques were designed for RAM-tag caches, where the cache tags are held in RAM structures. However, commercial low-power microprocessors use CAM-tag caches, where the cache tags are held in Content Addressable Memory [2, 5]. CAM-tag caches are popular in low-power processors because they provide high associativity, which avoids expensive cache misses and results in lower overall energy [3].
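The resizing idea described above can be illustrated with a minimal sketch. This is not the authors' design: it models a generic interval-based way-resizing policy, where the controller periodically checks the observed miss rate and powers one cache way down when the miss rate is low, or back up when it rises. The class name, thresholds, and interval mechanism are all illustrative assumptions.

```python
class ResizableCache:
    """Toy set-associative cache whose ways can be powered on/off."""

    def __init__(self, num_sets=64, num_ways=4):
        self.num_sets = num_sets
        self.num_ways = num_ways
        self.active_ways = num_ways              # ways currently powered on
        # tags[set] is an LRU-ordered list of tags; None marks an empty line
        self.tags = [[None] * num_ways for _ in range(num_sets)]
        self.hits = self.misses = 0

    def access(self, addr):
        """Look up an address; only the active ways hold data."""
        s = addr % self.num_sets
        tag = addr // self.num_sets
        ways = self.tags[s][:self.active_ways]
        if tag in ways:
            self.hits += 1
            ways.remove(tag)                     # move to MRU position
            ways.insert(0, tag)
        else:
            self.misses += 1
            ways.insert(0, tag)
            ways = ways[:self.active_ways]       # evict the LRU line
        self.tags[s][:self.active_ways] = ways

    def end_interval(self, low=0.02, high=0.10):
        """At each interval boundary, resize based on the miss rate."""
        total = self.hits + self.misses
        miss_rate = self.misses / total if total else 0.0
        if miss_rate < low and self.active_ways > 1:
            self.active_ways -= 1                # power down one way
            for s in range(self.num_sets):       # flush the disabled way
                self.tags[s][self.active_ways] = None
        elif miss_rate > high and self.active_ways < self.num_ways:
            self.active_ways += 1                # power a way back up
        self.hits = self.misses = 0
        return miss_rate
```

For example, a program that repeatedly touches a small working set will show a near-zero miss rate once the cache is warm, and the controller will shrink the active size; a phase change that raises the miss rate grows it back. A real implementation would also have to handle dirty-line writeback before disabling a way, and, as the abstract notes, the mechanism differs between RAM-tag and CAM-tag organizations.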


Similar Resources

Multi-Level Cache Resizing

Hardware designers are constantly looking for ways to squeeze waste out of architectures to achieve better power efficiency. Cache resizing is a technique that can remove wasteful power consumption in caches. The idea is to determine the minimum cache a program needs to run at near-peak performance, and then reconfigure the cache to implement this efficient capacity. While there has been signif...


Batch Resizing Policies and Techniques for Fine-Grain Grid Tasks: The Nuts and Bolts

The overhead of processing fine-grain tasks on a grid induces the need for batch processing or task group deployment in order to minimise overall application turnaround time. When deciding the granularity of a batch, the processing requirements of each task should be considered as well as the utilisation constraints of the interconnecting network and the designated resources. However, the dynam...


Exploiting Choice in Resizable Cache Design to Optimize Deep-Submicron Processor Energy-Delay

Cache memories account for a significant fraction of a chip’s overall energy dissipation. Recent research advocates using “resizable” caches to exploit cache requirement variability in applications to reduce cache size and eliminate energy dissipation in the cache’s unused sections with minimal impact on performance. Current proposals for resizable caches fundamentally vary in two design aspect...


Cache-Affinity Scheduling for Fine Grain Multithreading

Cache utilisation is often very poor in multithreaded applications, due to the loss of data access locality incurred by frequent context switching. This problem is compounded on shared memory multiprocessors when dynamic load balancing is introduced and thread migration disrupts cache content. In this paper, we present a technique, which we refer to as ‘batching’, for reducing the negative impa...


Near fine grain parallel processing using a multiprocessor with MAPLE

The multi-grain parallelizing scheme is an effective parallelization scheme that exploits parallelism at various levels: coarse grain (macro-dataflow), medium grain (loop-level parallelization), and near-fine grain (statement-level parallelization) from a sequential program. The multiprocessor ASCA is designed for efficient execution of multi-grain parallelized programs. A processing element called MAPLE is mai...



Publication date: 2003